Strict Generalization in Multilayered Perceptron Networks

Authors

  • Debrup Chakraborty
  • Nikhil R. Pal
Abstract

Typically, the response of a multilayered perceptron (MLP) network on points far away from the boundary of its training data is not very reliable. When test points lie far outside that boundary, the network should refuse to make a decision on them. We propose a training scheme for MLPs that tries to achieve this. Our methodology trains a composite network consisting of two subnetworks: a mapping network and a vigilance network. The mapping network learns the usual input-output relation present in the data, while the vigilance network learns a decision boundary and decides on which points the mapping network should respond. Although we propose the methodology here for multilayered perceptrons, the philosophy is quite general and can also be used with other learning machines.
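
As a concrete illustration of the proposal, the following minimal Python/NumPy sketch shows how a vigilance network could gate a mapping network at inference time. The architectures, the sigmoid vigilance output, and the 0.5 acceptance threshold are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def mlp_forward(x, layers):
        # Plain MLP forward pass; `layers` is a list of (W, b) pairs,
        # with tanh activations on all hidden layers.
        a = x
        for W, b in layers[:-1]:
            a = np.tanh(a @ W + b)
        W, b = layers[-1]
        return a @ W + b

    def strict_predict(x, mapping_net, vigilance_net, threshold=0.5):
        # The vigilance network (single output unit here) estimates
        # whether x lies inside the region covered by the training data.
        v = 1.0 / (1.0 + np.exp(-mlp_forward(x, vigilance_net)))
        if v.item() < threshold:
            return None  # refuse to respond: x is outside the learned region
        return mlp_forward(x, mapping_net)

A rejected point returns None rather than an unreliable extrapolation, which is the "strict generalization" behaviour the abstract describes.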


Similar Articles

Modularity - a Concept for New Neural Network Architectures

This paper focuses on the powerful concept of modularity. It is described how this concept is deployed in natural neural networks at an architectural as well as a functional level. Furthermore, different approaches to modular neural networks are discussed. Based on this, a two-layer modular neural system is introduced. The basic building blocks of the architecture are multilayer Perceptrons (M...

Modeling with Recurrent Neural Networks using Generalized Mean Neuron Model

This paper presents the use of the generalized mean neuron model (GMN) in recurrent neural networks (RNNs). The GMN includes a new aggregation function based on the concept of the generalized mean of all the inputs to the neuron. Learning is implemented on-line, based on input-output data, using an alternative approach to the recurrent backpropagation learning algorithm. The learning and generaliza...
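
The generalized mean at the heart of the GMN model is easy to state. In the sketch below, the weight normalization and the handling of the exponent p are assumptions, not the paper's exact definition.

    import numpy as np

    def generalized_mean(x, w, p):
        # Weighted generalized mean (sum_i w_i * x_i**p)**(1/p), with the
        # weights normalized to sum to one. Inputs are assumed positive
        # and p nonzero so that the exponents are well defined.
        w = w / w.sum()
        return (w @ np.power(x, p)) ** (1.0 / p)

    x = np.array([1.0, 2.0, 4.0])
    w = np.array([1.0, 1.0, 1.0])
    print(generalized_mean(x, w, 1.0))   # arithmetic mean: 2.333...
    print(generalized_mean(x, w, -1.0))  # harmonic mean: ~1.714

Varying p sweeps the aggregation between min-like and max-like behaviour, which is what makes such a neuron more flexible than a fixed weighted sum.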

On overfitting, generalization, and randomly expanded training sets

An algorithmic procedure is developed for the random expansion of a given training set to combat overfitting and improve the generalization ability of backpropagation trained multilayer perceptrons (MLPs). The training set is K-means clustered and locally most entropic colored Gaussian joint input-output probability density function (pdf) estimates are formed per cluster. The number of clusters...
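
The expansion procedure sketched in this abstract can be approximated in a few lines. The version below uses plain per-cluster sample covariances in place of the paper's entropy-based colored Gaussian estimates, so it is a rough sketch of the idea, not the published algorithm.

    import numpy as np
    from sklearn.cluster import KMeans

    def expand_training_set(X, Y, n_clusters=5, n_new=20, seed=0):
        # Cluster the joint input-output vectors, fit a Gaussian to
        # each cluster, and sample synthetic training pairs from it.
        rng = np.random.default_rng(seed)
        Z = np.hstack([X, Y])
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit(Z).labels_
        samples = []
        for k in range(n_clusters):
            Zk = Z[labels == k]
            mu = Zk.mean(axis=0)
            cov = np.cov(Zk, rowvar=False) + 1e-6 * np.eye(Z.shape[1])  # regularize
            samples.append(rng.multivariate_normal(mu, cov, n_new))
        Z_new = np.vstack(samples)
        d = X.shape[1]
        return Z_new[:, :d], Z_new[:, d:]  # split back into inputs and targets

The synthetic pairs are appended to the original training set before backpropagation, smoothing the effective target function around each cluster.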

Multilayer Kerceptron

Multilayer Perceptrons (MLP) are formulated within the Support Vector Machine (SVM) framework by constructing multilayer networks of SVMs. The coupled approximation scheme can take advantage of the generalization capabilities of the SVM and the combinatory feature of the hidden layer of the MLP. The resulting network, the Multilayer Kerceptron (MLK), has its own backpropagation procedure, which we derive here...

Modified Functional Link Artificial Neural Network

In this work, a Modified Functional Link Artificial Neural Network (M-FLANN) is proposed which is simpler than a Multilayer Perceptron (MLP) and improves upon the universal approximation capability of the Functional Link Artificial Neural Network (FLANN). The MLP and its variants, the Direct Linear Feedthrough Artificial Neural Network (DLFANN), FLANN, and M-FLANN, have been implemented to model a simulated ...
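
The functional-expansion idea that FLANN and its variants share is easy to illustrate. The trigonometric expansion and delta-rule training below are one common FLANN setup and are assumptions here; the M-FLANN modification itself is not reproduced.

    import numpy as np

    def trig_expand(x):
        # Fixed nonlinear expansion: each feature x_i is mapped to
        # (x_i, sin(pi*x_i), cos(pi*x_i)); no hidden layer is needed.
        return np.concatenate([x, np.sin(np.pi * x), np.cos(np.pi * x)])

    def train_flann(X, y, epochs=100, lr=0.01):
        # Delta-rule (LMS) training of the single linear output layer
        # on the expanded features.
        w = np.zeros(trig_expand(X[0]).shape[0])
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                phi = trig_expand(xi)
                w += lr * (yi - w @ phi) * phi
        return w

Because only the output weights are trained while the fixed expansion supplies the nonlinearity, the model keeps a single-layer structure, which is why it is simpler than an MLP.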

Journal:

Volume   Issue 

Pages  -

Publication date: 2007